Gradient Boost

Before moving forward with the to-do list, let's throw a gradient boosting model at the problem.

For many reasons, Random Forest is usually a very good baseline model. In this particular case I started with polynomial OLS as the baseline, simply because the correlations made it evident that the relationship between temperature and consumption follows a polynomial shape. But let's get back to tree ensembles, this time with gradient boosting.
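
Roughly, the vanilla pipeline looks like this. A minimal sketch: ColumnSelector here is a stand-in I define for the column-selection step used throughout this series, and X_train/y_train are hypothetical names for the earlier train/test split.

from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import Pipeline

class ColumnSelector(BaseEstimator, TransformerMixin):
    """Minimal stand-in for the column-selection step (illustrative only)."""
    def __init__(self, columns=None):
        self.columns = columns
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return X[self.columns]

gb_pipe = Pipeline(steps=[
    ("vars", ColumnSelector(columns=["tt_tu_mean", "rf_tu_mean", "td_mean",
                                     "vp_std_mean", "tf_std_mean"])),
    ("model", GradientBoostingRegressor(random_state=7)),
])
# gb_pipe.fit(X_train, y_train)  # hypothetical names from the earlier split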

Model Cards provide a framework for transparent, responsible reporting. Use the vetiver `.qmd` Quarto template as a place to start, with vetiver.model_card().
Writing pin:
Name: 'wd-gb'
Version: 20250202T105403Z-49772
<vetiver.vetiver_model.VetiverModel at 0x7f776fd8eb10>
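
The pin output above is what vetiver prints when the fitted model is versioned on a pins board. A minimal sketch, assuming a local board (the board choice is mine; the pin name 'wd-gb' matches the output):

import pins
from vetiver import VetiverModel, vetiver_pin_write

# Wrap the fitted pipeline and write it to a (local) pins board.
board = pins.board_local(allow_pickle_read=True)
v = VetiverModel(gb_pipe, model_name="wd-gb")
vetiver_pin_write(board, v)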

Metrics

                                        Single Split             CV
                                        train      test          test        train
MAE - Mean Absolute Error               1.428841   1.997698      2.419497    1.262800
MSE - Mean Squared Error                4.072073   9.900364     24.291431    3.004306
RMSE - Root Mean Squared Error          2.017938   3.146484      4.097602    1.732249
R2 - Coefficient of Determination       0.962003   0.866579     -1.075215    0.969405
MAPE - Mean Absolute Percentage Error   0.127380   0.192993      0.274971    0.111916
EVS - Explained Variance Score          0.962003   0.872715     -0.301597    0.969405
MeAE - Median Absolute Error            1.023071   1.231719      1.527174    0.938083
D2 - D2 Absolute Error Score            0.809121   0.684172     -0.239578    0.822541
Pinball - Mean Pinball Loss             0.714421   0.998849      1.209748    0.631400
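
All of these are standard scikit-learn metrics. A sketch of how one column of the table can be computed (regression_report is my own helper name):

import numpy as np
from sklearn.metrics import (
    mean_absolute_error, mean_squared_error, r2_score,
    mean_absolute_percentage_error, explained_variance_score,
    median_absolute_error, d2_absolute_error_score, mean_pinball_loss,
)

def regression_report(y_true, y_pred):
    # Computes one column of the metrics table above.
    mse = mean_squared_error(y_true, y_pred)
    return {
        "MAE": mean_absolute_error(y_true, y_pred),
        "MSE": mse,
        "RMSE": np.sqrt(mse),
        "R2": r2_score(y_true, y_pred),
        "MAPE": mean_absolute_percentage_error(y_true, y_pred),
        "EVS": explained_variance_score(y_true, y_pred),
        "MeAE": median_absolute_error(y_true, y_pred),
        "D2": d2_absolute_error_score(y_true, y_pred),
        "Pinball": mean_pinball_loss(y_true, y_pred),
    }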

Scatter plot matrix

Observed vs. Predicted and Residuals vs. Predicted

Check the residuals to assess the goodness of fit (a plotting sketch follows the checklist):

  • white noise, or is there a pattern?
  • heteroscedasticity?
  • non-linearity?
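
A minimal plotting sketch for these two panels, assuming the fitted gb_pipe from above and X_test/y_test from the earlier single split (hypothetical names):

import matplotlib.pyplot as plt

y_pred = gb_pipe.predict(X_test)
resid = y_test - y_pred

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.scatter(y_pred, y_test, s=10, alpha=0.6)
ax1.axline((0, 0), slope=1, color="grey", linestyle="--")  # ideal 1:1 line
ax1.set(xlabel="Predicted", ylabel="Observed", title="Observed vs. Predicted")
ax2.scatter(y_pred, resid, s=10, alpha=0.6)
ax2.axhline(0, color="grey", linestyle="--")
ax2.set(xlabel="Predicted", ylabel="Residual", title="Residuals vs. Predicted")
plt.show()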

Normality of residuals:

  • Are the residuals normally distributed?
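
A Q-Q plot is the usual check. A sketch using scipy on the resid series from above:

import scipy.stats as stats
import matplotlib.pyplot as plt

# Q-Q plot of the residuals against a normal distribution; points close to
# the diagonal suggest approximately normal residuals.
fig, ax = plt.subplots(figsize=(5, 5))
stats.probplot(resid, dist="norm", plot=ax)
plt.show()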

Leverage

Scale-Location plot

Residuals Autocorrelation Plot

Residuals vs Time
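
A sketch for the last two diagnostics, assuming resid keeps the DatetimeIndex of the test set:

from statsmodels.graphics.tsaplots import plot_acf
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10, 6))
plot_acf(resid, ax=ax1, lags=40)           # autocorrelation of the residuals
ax2.plot(resid.index, resid, lw=0.8)       # residuals over time
ax2.axhline(0, color="grey", linestyle="--")
ax2.set(xlabel="Time", ylabel="Residual")
plt.show()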

Again, the model overfits a lot: the train errors sit well below the test and CV errors. Let's see whether tuning the hyperparameters helps.

[Search results, one score-vs-value plot per parameter: param_model__learning_rate, param_model__max_depth, param_model__min_samples_leaf, param_model__min_samples_split, param_model__n_estimators, param_model__subsample, param_vars__columns]
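
A search with results in this shape could look roughly as follows. The candidate values, the choice of RandomizedSearchCV, and the TimeSeriesSplit splitter are all assumptions; only the parameter names come from the results above.

from sklearn.model_selection import RandomizedSearchCV, TimeSeriesSplit

param_distributions = {
    "model__learning_rate": [0.01, 0.05, 0.1, 0.2],
    "model__max_depth": [2, 3, 5, 8],
    "model__min_samples_leaf": [1, 5, 10, 20],
    "model__min_samples_split": [2, 12, 24, 48],
    "model__n_estimators": [30, 60, 100, 200],
    "model__subsample": [0.5, 0.8, 1.0],
    # Candidate feature subsets for the ColumnSelector step (illustrative):
    "vars__columns": [
        ["tt_tu_mean", "rf_tu_mean", "td_mean", "vp_std_mean", "tf_std_mean"],
        ["rf_tu_mean", "vp_std_mean"],
    ],
}

search = RandomizedSearchCV(
    gb_pipe, param_distributions, n_iter=100,
    scoring="neg_mean_absolute_error",
    cv=TimeSeriesSplit(n_splits=5),
    random_state=7, n_jobs=-1,
)
# search.fit(X_train, y_train)
# search.best_params_  -> the dictionary shown below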

Best model

{'model__learning_rate': 0.1,
 'model__max_depth': 5,
 'model__min_samples_leaf': 5,
 'model__min_samples_split': 48,
 'model__n_estimators': 60,
 'model__subsample': 1,
 'vars__columns': ['rf_tu_mean', 'vp_std_mean']}
Pipeline(steps=[('vars', ColumnSelector(columns=['rf_tu_mean', 'vp_std_mean'])),
                ('model',
                 GradientBoostingRegressor(max_depth=5, min_samples_leaf=5,
                                           min_samples_split=48,
                                           n_estimators=60, random_state=7,
                                           subsample=1))])

Metrics

                                        Single Split             CV
                                        train      test          test        train
MAE - Mean Absolute Error               1.642678   1.981395      2.414268    1.499182
MSE - Mean Squared Error                7.348818   9.867181     22.917547    4.898252
RMSE - Root Mean Squared Error          2.710870   3.141207      3.942027    2.211838
R2 - Coefficient of Determination       0.931427   0.867026     -1.056060    0.950147
MAPE - Mean Absolute Percentage Error   0.136095   0.195423      0.296564    0.123097
EVS - Explained Variance Score          0.931427   0.873329     -0.087526    0.950147
MeAE - Median Absolute Error            1.073432   1.168755      1.683890    0.995940
D2 - D2 Absolute Error Score            0.780554   0.686750     -0.314293    0.789241
Pinball - Mean Pinball Loss             0.821339   0.990697      1.207134    0.749591

Scatter plot matrix

Observed vs. Predicted and Residuals vs. Predicted

Check the residuals to assess the goodness of fit, with the same checklist as for the vanilla model:

  • white noise, or is there a pattern?
  • heteroscedasticity?
  • non-linearity?

Normality of residuals:

  • Are the residuals normally distributed?

Leverage

Scale-Location plot

Residuals Autocorrelation Plot

Residuals vs Time

Compare vanilla vs. tuned

Metrics

Single split

Metrics based on the test set of the single split

Cross validation
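
A sketch of how such a comparison can be scored, reusing gb_pipe and the hypothetical search object from the tuning sketch above:

from sklearn.model_selection import TimeSeriesSplit, cross_validate

# Score both pipelines with the same splitter and scorers.
scoring = {"MAE": "neg_mean_absolute_error", "RMSE": "neg_root_mean_squared_error"}
cv = TimeSeriesSplit(n_splits=5)
for name, model in [("vanilla", gb_pipe), ("tuned", search.best_estimator_)]:
    res = cross_validate(model, X_train, y_train, scoring=scoring, cv=cv)
    print(f"{name}: MAE={-res['test_MAE'].mean():.3f}, "
          f"RMSE={-res['test_RMSE'].mean():.3f}")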

Predictions, residuals, observed

Time vs. Predicted and Observed

Time vs. Residuals
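
A sketch for these two time plots, assuming a DatetimeIndex on y_test and the tuned model from the search sketch above:

import matplotlib.pyplot as plt

y_pred_tuned = search.best_estimator_.predict(X_test)

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10, 6), sharex=True)
ax1.plot(y_test.index, y_test, label="observed", lw=0.8)
ax1.plot(y_test.index, y_pred_tuned, label="predicted", lw=0.8)
ax1.legend()
ax1.set(ylabel="Consumption", title="Time vs. Predicted and Observed")
ax2.plot(y_test.index, y_test - y_pred_tuned, lw=0.8)
ax2.axhline(0, color="grey", linestyle="--")
ax2.set(xlabel="Time", ylabel="Residual", title="Time vs. Residuals")
plt.show()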

Model details

Vanilla:

Pipeline(steps=[('vars',
                 ColumnSelector(columns=['tt_tu_mean', 'rf_tu_mean', 'td_mean',
                                         'vp_std_mean', 'tf_std_mean'])),
                ('model', GradientBoostingRegressor(random_state=7))])

Tuned:

Pipeline(steps=[('vars', ColumnSelector(columns=['rf_tu_mean', 'vp_std_mean'])),
                ('model',
                 GradientBoostingRegressor(max_depth=5, min_samples_leaf=5,
                                           min_samples_split=48,
                                           n_estimators=60, random_state=7,
                                           subsample=1))])

TODOs